A Introduction of do calculus

Neural Information Processing Systems

A Introduction of do calculus. Do-calculus consists of three rules that help with identifying causal effects. Intuitively, Rule A.1 states when an observation can be omitted in estimating the interventional distribution. […] Theorem B.2. Suppose that the latent variable […] They assume that confounders exist but are unobservable. Adapting C-Disentanglement to existing works further improves their performance.
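For reference, the three rules the excerpt alludes to are Pearl's standard rules of do-calculus (supplied here from the standard formulation, not from this paper's appendix). Writing $G_{\overline{X}}$ for the causal graph with all edges into $X$ removed and $G_{\underline{Z}}$ for the graph with all edges out of $Z$ removed:

\begin{align*}
\textbf{Rule 1 (insertion/deletion of observations):}\quad
& P(y \mid do(x), z, w) = P(y \mid do(x), w)
&& \text{if } (Y \perp\!\!\!\perp Z \mid X, W)_{G_{\overline{X}}} \\
\textbf{Rule 2 (action/observation exchange):}\quad
& P(y \mid do(x), do(z), w) = P(y \mid do(x), z, w)
&& \text{if } (Y \perp\!\!\!\perp Z \mid X, W)_{G_{\overline{X}\,\underline{Z}}} \\
\textbf{Rule 3 (insertion/deletion of actions):}\quad
& P(y \mid do(x), do(z), w) = P(y \mid do(x), w)
&& \text{if } (Y \perp\!\!\!\perp Z \mid X, W)_{G_{\overline{X}\,\overline{Z(W)}}}
\end{align*}

where $Z(W)$ is the set of $Z$-nodes that are not ancestors of any $W$-node in $G_{\overline{X}}$. Rule 1 is the one paraphrased above: an observed variable $Z$ can be dropped from the conditioning set when it is d-separated from $Y$ in the mutilated graph.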





Mutual Information Collapse Explains Disentanglement Failure in $β$-VAEs

Vu, Minh, Wan, Xiaoliang, Wei, Shuangqing

arXiv.org Machine Learning

The $β$-VAE is a foundational framework for unsupervised disentanglement, using $β$ to regulate the trade-off between latent factorization and reconstruction fidelity. Empirically, however, disentanglement performance exhibits a pervasive non-monotonic trend: benchmarks such as MIG and SAP typically peak at intermediate $β$ and collapse as regularization increases. We demonstrate that this collapse is a fundamental information-theoretic failure, where strong Kullback-Leibler pressure promotes marginal independence at the expense of the latent channel's semantic informativeness. By formalizing this mechanism in a linear-Gaussian setting, we prove that for $β> 1$, stationarity-induced dynamics trigger a spectral contraction of the encoder gain, driving latent-factor mutual information to zero. To resolve this, we introduce the $λβ$-VAE, which decouples regularization pressure from informational collapse via an auxiliary $L_2$ reconstruction penalty $λ$. Extensive experiments on dSprites, Shapes3D, and MPI3D-real confirm that $λ> 0$ stabilizes disentanglement and restores latent informativeness over a significantly broader range of $β$, providing a principled theoretical justification for dual-parameter regularization in variational inference backbones.
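The dual-parameter objective described above can be sketched as a single loss: the usual reconstruction term, the KL term scaled by $β$, and an auxiliary $L_2$ reconstruction penalty scaled by $λ$. The function below is a minimal NumPy sketch under that reading of the abstract; the function name, argument layout, and default weights are assumptions, not the authors' implementation.

```python
import numpy as np

def lambda_beta_vae_loss(x, x_hat, mu, log_var, beta=4.0, lam=0.5):
    """Sketch of a lambda-beta-VAE objective (names and defaults assumed).

    recon: Gaussian reconstruction NLL up to a constant (squared error).
    kl:    KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder with
           mean `mu` and log-variance `log_var`.
    l2:    auxiliary L2 reconstruction penalty, weighted by `lam`, meant to
           keep the latent channel informative even when `beta` is large.
    """
    recon = 0.5 * np.sum((x - x_hat) ** 2)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    l2 = np.sum((x - x_hat) ** 2)
    return recon + beta * kl + lam * l2
```

With `lam=0` this reduces to the plain $β$-VAE loss, which is the collapse regime the paper analyzes; `lam > 0` adds reconstruction pressure that is not traded off against the KL term.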



e449b9317dad920c0dd5ad0a2a2d5e49-Paper.pdf

Neural Information Processing Systems

In the natural sciences, physics has found great success by describing the universe in terms of symmetry preserving transformations. Inspired by this formalism, we propose a framework, built upon the theory of group representation, for learning representations of a dynamical environment structured around the transformations that generate its evolution. Experimentally, we learn the structure of explicitly symmetric environments without supervision from observational data generated by sequential interactions.
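The core object in such a framework is a group representation: a map $ρ$ from group elements to invertible matrices satisfying $ρ(gh) = ρ(g)ρ(h)$. As a toy illustration of that property (not the paper's learned environment), the cyclic group $C_4$ of quarter-turns can be represented by 2D rotation matrices:

```python
import numpy as np

def rho(k):
    """Representation of the cyclic group C_4: element k -> rotation by k * 90 degrees."""
    theta = k * np.pi / 2
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Homomorphism property: composing group elements matches multiplying matrices.
assert np.allclose(rho(1) @ rho(2), rho(3))
```

Learning such a representation from interaction data amounts to fitting matrices that act on the latent state so that this homomorphism property holds for the environment's transition dynamics.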